ScrawnyCrocodile51
Moderator
13 Questions, 33 Answers
  Active since 05 February 2025
  Last activity 4 months ago

Reputation

0

Badges 1

33 × Eureka!
0 Votes
3 Answers
456 Views
Hi, I want to set up a task that can "overwrite configuration on the UI". I found some references in the docs but couldn't make it work yet. I feel it should be ver...
6 months ago
0 Votes
2 Answers
508 Views
4 months ago
0 Votes
3 Answers
549 Views
Hi, I am exploring the following workflow to submit a task to a remote server: task = Task.create( branch=branch, repo=repo, script=script, requirements_file=r...
6 months ago
0 Votes
1 Answer
509 Views
Hello, I recently started running into an SSLError when using Task.init(): 2025-03-03 12:30:55,981:WARNING:urlopen:Retrying (Retry(total=239, connect=240, read...
6 months ago
0 Votes
16 Answers
625 Views
6 months ago
0 Votes
4 Answers
422 Views
Hi clearml community, is there a way to install additional packages on top of the base docker env when using Task.force_store_standalone_script() and task.execut...
4 months ago
0 Votes
2 Answers
515 Views
Hi, is there a configuration somewhere that allows me to see output from regular logging in the clearml console? Or clearml's logger.report_text(print_conso...
6 months ago
0 Votes
5 Answers
556 Views
Hi, is there a way to wait until a dataset finishes uploading before proceeding? Because I want to upload the dataset if it does not already exist and then process the ...
6 months ago
0 Votes
9 Answers
446 Views
5 months ago
0 Votes
2 Answers
476 Views
Hi, I am wondering: after a task submitted to the remote server finishes running, will the docker container / disk space (really I am more interested about the d...
6 months ago
0 Votes
10 Answers
555 Views
5 months ago
0 Votes
2 Answers
491 Views
Hello, I am trying to programmatically retrieve the artifact FILE_PATH information that gets displayed in the UI. I have a pandas dataframe uploaded as art...
5 months ago
0 Votes
4 Answers
555 Views
Hi clearml team, is there a way to overwrite working_dir when creating a task via the task.init() workflow? The underlying function I am triggering relies on the...
6 months ago
0 Hi, Is There A Way To Wait Until A Dataset Finish Uploading Before Proceed? Because I Want To Upload Dataset If It Is Not Already Exist And Then Process The Dataset

Roughly I am trying to do this:

def upload_clearml_dataset_from_external_source(
    source_url,
    dataset_name: str,
    dataset_project: str,
):
    # reference: 

    dataset = Dataset.create(dataset_name=dataset_name, dataset_project=dataset_project)
    dataset.add_external_files(source_url=source_url)
    dataset.upload()

    dataset.finalize()


upload_clearml_dataset_from_external_source("
", name, project)

Dataset.get(dataset_project=project, dataset...
6 months ago
0 Hi Clearml Community, Is There A Way To Install Additional Packages On Top Of Base Docker Env When Using

Any follow-up on this question? Recap:
Task.add_requirements() doesn't seem to install the package in my experiment

Additionally, as an alternative to add_requirements(), if I can't get it working, is there an example of using a docker bash init script you can point me to?

4 months ago
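If the requirements route stays broken, one sketch of the bash-init-script alternative (my own assumption about the API shape; check whether your clearml version's set_base_docker accepts a docker_setup_bash_script argument before relying on it) is to compose a small script that installs the extra packages inside the container before the task starts:

```python
def build_setup_script(packages):
    """Compose a bash setup script that pip-installs extra packages
    inside the container before the task starts."""
    lines = ["#!/bin/bash", "set -e"]
    lines += [f"pip install {pkg}" for pkg in packages]
    return "\n".join(lines)


script = build_setup_script(["fastparquet"])

# Hypothetical ClearML usage (not executed here; the keyword argument
# name is an assumption, verify against your clearml version):
# task.set_base_docker(docker_image="my-base-image",
#                      docker_setup_bash_script=script)
```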
0 Also, Any Advice On Using Best Practice Of Using Task.Create() Instead Of Task.Init()? I Have The Need Of Specifying Docker And Repository, So Only Find Task.Create() Can Achieve What I Need. But Then I End Up With Always Creating 2 Scripts For Each Task I
Executing task id [7605f1e5ce6b45e99e9302d93bc3bac6]:
repository = git@xxx
branch = xxx
version_num = 9dca88fa23ff93d446eb2ff7d615d7ade213c8aa
tag = 
docker_cmd = iocr.io/xxx
entry_point = clearml_init.py
working_dir = dev

Based on the logging,
working_dir = dev is the problem. I need a way to overwrite the working_dir.

6 months ago
0 Hi Clearml Community, Is There A Way To Install Additional Packages On Top Of Base Docker Env When Using

Hi @<1523701070390366208:profile|CostlyOstrich36> , I tried the Task.add_requirements way to add packages, but it doesn't seem to be working as I expected. Here is the snippet I used to set this up:

Task.force_store_standalone_script()
add_packages = ["fastparquet"]
for pkg in add_packages:
    Task.add_requirements(pkg)
task = Task.init(project_name=project_name, task_name=task_name)
task.set_base_docker(docker_arguments="--env CLEARML_AGENT_SKIP_PYTHON_ENV_INSTA...
4 months ago
0 Hi Clearml Team, Is There Best Practice To Improve Dataset'S Storage Efficiency? For Example, I Don'T Really Need All 5 Versions Of The Same Dataset Get Saved/Remembered, Is There A Way To Prune Old Versions Of Datasets To Be More Storage Efficient?

Hi John, the dataset.squash doc says "If a set of versions are given it will squash the versions diff into a single version". I want to double check: will it only keep the latest version?

Because I don't want any old-version content, even if an old version has more files than the latest version

4 months ago
0 <no title>
PROJECT_NAME = "test"
TASK_NAME = "test_connect"
QUEUE_NAME = "default"
task = Task.init(project_name=PROJECT_NAME, task_name=TASK_NAME)

config = {
    "name": "foo",
    "arg1": "bar",
}
task.connect(config)

task.execute_remotely(queue_name=QUEUE_NAME)
# ------------- end of setup -------------

def dummy_op(config):
    pprint(config)

    return config


dummy_op(config)

Screenshot also provided to show that the "edit" button only appears for user properties, not hyperparameters.

6 months ago
0 <no title>

Thanks John! This is exactly it! I needed to reset the task first, then the edit feature shows up.

6 months ago
0 Hi Clearml Team, I Am Trying To Figure Out The Best Practice To Keep Track Of Different Models. The Lineage Functionality In The Models Tap Is Generally Helpful. But First Of All, I Need To Be Able To Find Which Project A Model Belongs To, Then Navigate T
m = Model(model_id)
print("Model.id:", m.id)  # <- this returns model_id, but I need project_id or project_name

print("Model.data:", m.data)  # <- AttributeError: 'Model' object has no attribute 'data'
5 months ago
0 Hi Clearml Team, I Am Trying To Figure Out The Best Practice To Keep Track Of Different Models. The Lineage Functionality In The Models Tap Is Generally Helpful. But First Of All, I Need To Be Able To Find Which Project A Model Belongs To, Then Navigate T

Aha, I see. That works to retrieve the project id.

A side question: I notice in the clearml design, Task, Model, and Dataset objects each have a base model, but there is no Project object. Is there a quick way to retrieve a project name based on its id?

5 months ago
0 Hi, I Am Exploring The Following Workflow To Submit A Task To Remote Server:

Installing in dev mode is the easiest, without having to publish it first

6 months ago
0 Hi, I Am Exploring The Following Workflow To Submit A Task To Remote Server:

Sure, essentially my local python project is organized using the "src layout" and looks like this:

foo/
    |--src/
        |--module.py
    |--pyproject.toml
    |--clearml_tasks/
        |--task1.py

In the project, it would use absolute imports like from foo import module , and I would install the foo project in editable mode during setup.

When I try to create a clearml task and send it to the remote server the above way (leveraging requirements.txt to configure library dependencies, and pro...

6 months ago
0 Also, Any Advice On Using Best Practice Of Using Task.Create() Instead Of Task.Init()? I Have The Need Of Specifying Docker And Repository, So Only Find Task.Create() Can Achieve What I Need. But Then I End Up With Always Creating 2 Scripts For Each Task I

I gave it a try, switching from Task.create() to Task.init(). I think I am pretty close to switching to init(), but I still have the issue of ModuleNotFoundError: No module named 'src' when using Task.init().

My project setup looks like this:

project_root/
    |--src/
    |--runbooks/
        |--run_task.py

So if I use Task.create(repo=xx, script="runbooks/run_task.py"), it works, but if I switch to using Task.init() with the same repo setup (task.set_repo, then followed by ...

6 months ago
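For what it's worth, a common workaround for this kind of ModuleNotFoundError (my own workaround, not official ClearML guidance) is to make the repo root importable at the top of the entry script, before any from src import ...:

```python
import os
import sys


def add_repo_root(levels_up=1, start=None):
    """Insert the directory `levels_up` levels above `start` into sys.path,
    so absolute imports like `from src import ...` resolve even when the
    script is launched from a subdirectory such as runbooks/."""
    if start is None:
        start = os.path.dirname(os.path.abspath(__file__))
    root = os.path.abspath(os.path.join(start, *[".."] * levels_up))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root


# e.g. at the very top of runbooks/run_task.py:
# add_repo_root(1)  # project_root/ is now on sys.path, so `import src` works
```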
0 Also, Any Advice On Using Best Practice Of Using Task.Create() Instead Of Task.Init()? I Have The Need Of Specifying Docker And Repository, So Only Find Task.Create() Can Achieve What I Need. But Then I End Up With Always Creating 2 Scripts For Each Task I
task = Task.init(
    project_name=PROJECT_NAME,
    task_name=TASK_NAME,
    task_type=Task.TaskTypes.data_processing,
)
task.set_repo(
    repo="git@xxx.git",
    branch=branch,
)
task.set_base_docker(
    docker_image="docker-image",
)
task.execute_remotely(queue_name=QUEUE_NAME)

This is how I use the task init

6 months ago
0 Hi, Is There A Way To Wait Until A Dataset Finish Uploading Before Proceed? Because I Want To Upload Dataset If It Is Not Already Exist And Then Process The Dataset

I used this setup to load a pretty big dataset from s3:

dataset.add_external_files(source_url)
dataset.upload(verbose=verbose)
dataset.finalize()

But then I see an error complaining that the dataset doesn't exist. So my best guess is that the upload is still happening in the background while the code has moved on to try to do something with that dataset.

So I am questioning if I have to explicitly add some logic to wait f...

6 months ago
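Since then I've been leaning toward an explicit wait as the safest bet. A minimal polling sketch (my own workaround; it assumes Dataset.is_final() reports the finalized state, which you should verify against your clearml version):

```python
import time


def wait_until(predicate, timeout=300.0, interval=5.0):
    """Poll `predicate` until it returns True or `timeout` seconds pass.
    Returns True if the condition was met, False on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)


# Hypothetical ClearML usage (not executed here):
# ok = wait_until(lambda: Dataset.get(dataset_project=project,
#                                     dataset_name=name).is_final(),
#                 timeout=600, interval=10)
```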